
set workers to 2 #2970

Merged
JacobCoffee merged 1 commit into main from gunicorn-workers on Mar 21, 2026

Conversation

@JacobCoffee (Member)

Description

  • Set workers to 2 to see if it'll help memory

Copilot AI review requested due to automatic review settings March 21, 2026 01:48
@JacobCoffee JacobCoffee merged commit c84ea20 into main Mar 21, 2026
13 of 14 checks passed
@JacobCoffee JacobCoffee deleted the gunicorn-workers branch March 21, 2026 01:48

Copilot AI left a comment


Pull request overview

This PR adjusts the Gunicorn runtime configuration to explicitly set the number of worker processes, with the stated goal of testing whether it improves memory usage.

Changes:

  • Add an explicit workers = 2 setting to gunicorn.conf.


@@ -1,5 +1,6 @@
bind = 'unix:/var/run/cabotage/nginx.sock'
backlog = 1024
workers = 2

Copilot AI Mar 21, 2026


The PR description says setting workers to 2 may help memory, but in this repo Gunicorn is started via gunicorn -c gunicorn.conf ... (see Procfile), so without an explicit value Gunicorn would default to 1 worker. Adding workers = 2 will generally increase the number of processes and can increase memory usage (and could also change throughput/latency characteristics). Consider either keeping workers at 1 for the memory experiment, or making it environment-configurable (e.g., via an env var) so you can tune per deployment without code changes.
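The environment-configurable approach suggested above could look like the following in gunicorn.conf (a sketch only; the GUNICORN_WORKERS variable name is an assumption, not something this repo defines):

```python
# gunicorn.conf -- sketch of an environment-configurable worker count.
# GUNICORN_WORKERS is a hypothetical variable name, not one this repo uses.
import os

bind = 'unix:/var/run/cabotage/nginx.sock'
backlog = 1024

# Fall back to Gunicorn's default of a single worker when the variable is
# unset, so the worker count can be tuned per deployment without code changes.
workers = int(os.environ.get("GUNICORN_WORKERS", "1"))
```

Note that Gunicorn itself also reads the WEB_CONCURRENCY environment variable as the default for `workers`, which may achieve the same per-deployment tuning without touching the config file at all.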


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: be8edfd139


@@ -1,5 +1,6 @@
bind = 'unix:/var/run/cabotage/nginx.sock'
backlog = 1024
workers = 2


P2: Avoid forcing two Gunicorn workers to fix memory pressure

If the cabotage web process is already close to its memory limit, workers = 2 moves it in the wrong direction: Gunicorn will fork a second Django worker on every web start, so request-time heap growth is duplicated even with preload_app = True, and pydotorg/settings/cabotage.py:11-12 also keeps a separate 600-second DB connection per worker. In the exact “help memory” scenario from the PR description, this increases per-instance RSS and connection usage rather than reducing them, making OOMs more likely.

